Using gradient directions to get global convergence of Newton-type methods

Authors

Abstract

The renewed interest in Steepest Descent (SD) methods following the work of Barzilai and Borwein [2] has driven us to consider a globalization strategy based on SD, which is applicable to any line-search method. In particular, we combine Newton-type directions with scaled SD steps to obtain suitable descent directions. Scaling the SD step length makes a significant difference with respect to similar approaches, in terms of both theoretical features and computational behavior. We apply our strategy to Newton's method and the BFGS method, with results that appear interesting compared with well-established globalization strategies devised ad hoc for those methods.
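The idea described in the abstract can be illustrated with a minimal sketch: take the Newton-type direction when it passes a descent test, and otherwise fall back to a scaled steepest-descent step. The descent test, the Rayleigh-quotient scaling of the SD step, and the Armijo backtracking parameters below are illustrative assumptions, not the paper's exact rules.

```python
import numpy as np

def globalized_newton(f, grad, hess, x0, tol=1e-8, max_iter=100):
    """Hypothetical sketch of a Newton method globalized via scaled
    steepest-descent fallback steps (not the paper's exact algorithm)."""
    x = x0.astype(float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        H = hess(x)
        try:
            d = np.linalg.solve(H, -g)          # Newton-type direction
        except np.linalg.LinAlgError:
            d = None
        # Accept d only if it is a sufficiently good descent direction;
        # otherwise use a scaled SD step (Rayleigh-quotient step length).
        if d is None or g @ d >= -1e-6 * np.linalg.norm(g) * np.linalg.norm(d):
            alpha = (g @ g) / max(g @ (H @ g), 1e-12)
            d = -alpha * g                      # scaled SD direction
        # Armijo backtracking line search
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * (g @ d):
            t *= 0.5
        x = x + t * d
    return x
```

On a strictly convex quadratic the Newton direction always passes the test and the method reduces to damped Newton; the SD fallback only activates when the Hessian model is singular or produces a poor direction.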


Similar articles

Global Convergence of Perry-Shanno Memoryless Quasi-Newton-type Method

In this paper, we analyze the global convergence properties of the Perry-Shanno memoryless quasi-Newton-type method associated with a new line search model. Preliminary numerical results are also reported.


Convergence analysis of inexact proximal Newton-type methods

We study inexact proximal Newton-type methods to solve convex optimization problems in composite form: minimize_{x∈R^n} f(x) := g(x) + h(x), where g is convex and continuously differentiable and h : R^n → R is a convex but not necessarily differentiable function whose proximal mapping can be evaluated efficiently. Proximal Newton-type methods require the solution of subproblems to obtain the search ...
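A standard example of a proximal mapping that "can be evaluated efficiently" is soft-thresholding, the prox of the ℓ1 norm; this is a textbook illustration, not code from the cited paper.

```python
import numpy as np

def prox_l1(v, t):
    """Proximal mapping of h(x) = t * ||x||_1 (soft-thresholding):
    prox_{t*h}(v) = argmin_x  0.5 * ||x - v||^2 + t * ||x||_1,
    solved componentwise in closed form."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
```

Proximal Newton-type methods replace the Euclidean distance in this subproblem with a Hessian-weighted norm, which is why their subproblems generally need an (inexact) iterative solve rather than a closed form.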


An Approach for Analyzing the Global Rate of Convergence of Quasi-Newton and Truncated-Newton Methods

Quasi-Newton and truncated-Newton methods are popular methods in optimization, and are traditionally seen as useful alternatives to the gradient and Newton methods. Throughout the literature, results are found that link quasi-Newton methods to certain first-order methods under various assumptions. We offer a simple proof to show that a range of quasi-Newton methods are first-order methods in th...


Global Convergence of Conjugate Gradient Methods without Line Search

Global convergence results are derived for well-known conjugate gradient methods in which the line search step is replaced by a step whose length is determined by a formula. The results include the following cases: 1. The Fletcher-Reeves method, the Hestenes-Stiefel method, and the Dai-Yuan method applied to a strongly convex LC^1 objective function; 2. The Polak-Ribière method and the Conjugate ...


Global Convergence of Policy Gradient Methods for Linearized Control Problems

Direct policy gradient methods for reinforcement learning and continuous control problems are a popular approach for a variety of reasons: 1) they are easy to implement without explicit knowledge of the underlying model; 2) they are an “end-to-end” approach, directly optimizing the performance metric of interest; 3) they inherently allow for richly parameterized policies. A notable drawback is th...



Journal

Journal title: Applied Mathematics and Computation

Year: 2021

ISSN: 1873-5649, 0096-3003

DOI: https://doi.org/10.1016/j.amc.2020.125612